
Conversation


@buraizu commented Nov 20, 2025

What does this PR do? What is the motivation?

  • Document new log setup methods
  • Add associated log-collection information from the main Google Cloud doc

Merge instructions

Merge readiness:

  • Ready for merge

For Datadog employees:

Your branch name MUST follow the <name>/<description> convention and include the forward slash (/). Without this format, your pull request will not pass CI, the GitLab pipeline will not run, and you won't get a branch preview. Getting a branch preview makes it easier for us to check any issues with your PR, such as broken links.

If your branch doesn't follow this format, rename it or create a new branch and PR.

[6/5/2025] Merge queue has been disabled on the documentation repo. If you have write access to the repo, the PR has been reviewed by a Documentation team member, and all of the required checks have passed, you can use the Squash and Merge button to merge the PR. If you don't have write access, or you need help, reach out in the #documentation channel in Slack.

Additional notes

@buraizu requested a review from a team as a code owner November 20, 2025 21:31
@buraizu added the WORK IN PROGRESS (No review needed, it's a wip ;)) label Nov 20, 2025
@github-actions

Preview links (active after the build_preview check completes)

@tedkahwaji left a comment


Can we include the necessary GCP and Datadog permissions needed to run the QuickStart?

**Note**: Only folders and projects that you have the necessary access and permissions for appear in this section. Likewise, folders and projects without a display name do not appear.
1. In the **Dataflow Job Configuration** section, specify configuration options for the Dataflow job:
- Select deployment settings (Google Cloud region and project to host the created resources---Pub/Sub topics and subscriptions, a log routing sink, a Secret Manager entry, a service account, a Cloud Storage bucket, and a Dataflow job)
**Note**: You cannot name the created resources---the script uses predefined names, so it can skip creation if it finds preexisting resources with the same name.

Let's remove this entirely. Sorry, I know the documentation request doc included this part, but I'd prefer we keep it out of the public documentation.
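For context on the deployment settings quoted above, here is a rough Terraform sketch of the kinds of resources such a setup provisions (the Dataflow job itself is sketched further down in this thread). Everything here is illustrative: the names are hypothetical, not the script's predefined ones, and this is not the script's actual implementation.

```hcl
# Illustrative sketch only; resource names are hypothetical, not the script's
# predefined names, and this is not the actual setup implementation.
provider "google" {
  project = "example-project"   # hypothetical project ID
  region  = "us-central1"       # hypothetical deployment region
}

resource "google_pubsub_topic" "export_logs" {
  name = "export-logs-to-datadog"
}

resource "google_pubsub_subscription" "export_logs" {
  name  = "export-logs-to-datadog-sub"
  topic = google_pubsub_topic.export_logs.id
}

# Log routing sink that sends logs to the Pub/Sub topic (an inclusion
# filter could be added to narrow what gets routed).
resource "google_logging_project_sink" "datadog" {
  name        = "datadog-log-sink"
  destination = "pubsub.googleapis.com/${google_pubsub_topic.export_logs.id}"
}

# Secret Manager entry, typically holding the Datadog API key.
resource "google_secret_manager_secret" "datadog_api_key" {
  secret_id = "datadog-api-key"
  replication {
    auto {}   # provider v5+ replication syntax
  }
}

# Service account for the Dataflow workers.
resource "google_service_account" "dataflow_worker" {
  account_id = "datadog-dataflow-worker"
}

# Cloud Storage bucket used for Dataflow staging/temp files.
resource "google_storage_bucket" "dataflow_staging" {
  name     = "example-datadog-dataflow-staging"   # bucket names must be globally unique
  location = "US"
}
```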

- Click **Open Google Cloud Shell** to run the script in the [Google Cloud Shell][102].
1. After running the script, return to the Google Cloud integration tile.
1. In the **Select Projects** section, select the folders and projects to forward logs from. If you select a folder, logs are forwarded from all of its child projects.
**Note**: Only folders and projects that you have the necessary access and permissions for appear in this section. Likewise, folders and projects without a display name do not appear.

Can we have the Notes on their own line?
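On the folder behavior quoted above ("If you select a folder, logs are forwarded from all of its child projects"), one way Google Cloud supports that is a folder-level sink with `include_children`; whether the integration does exactly this under the hood is an assumption on my part. A minimal sketch with purely illustrative names:

```hcl
# Minimal sketch (assumed mechanism, hypothetical names): a folder-level log sink
# with include_children forwards logs from every child project of the folder.
resource "google_logging_folder_sink" "datadog" {
  name             = "datadog-log-sink"
  folder           = "folders/123456789012"   # hypothetical folder ID
  destination      = "pubsub.googleapis.com/projects/example-project/topics/export-logs-to-datadog"
  include_children = true
}
```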

**Note**: You cannot name the created resources---the script uses predefined names, so it can skip creation if it finds preexisting resources with the same name.
- Select scaling settings (number of workers and maximum workers)
- Select performance settings (maximum number of parallel requests and batch size)
- Select execution options (Streaming Engine is enabled by default; read more about its [benefits][103])

I would remove "Streaming Engine is enabled by default; read more about its [benefits][103]".
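Separately, the scaling, performance, and execution options quoted above map fairly directly onto Dataflow job arguments and Pub/Sub-to-Datadog template parameters, so a hedged sketch may help readers connect the settings to what gets deployed. This assumes the Google-provided `Cloud_PubSub_to_Datadog` template and the hypothetical resources from the earlier sketch; the values are illustrative, not what the script actually sets:

```hcl
# Illustrative sketch, not the job the script actually creates. Template path and
# parameter names follow the Google-provided Pub/Sub-to-Datadog template; values,
# project, and resource names are hypothetical.
resource "google_dataflow_job" "pubsub_to_datadog" {
  name              = "export-logs-to-datadog"
  template_gcs_path = "gs://dataflow-templates/latest/Cloud_PubSub_to_Datadog"
  temp_gcs_location = "gs://example-datadog-dataflow-staging/tmp"

  max_workers             = 3      # scaling: maximum number of workers
  enable_streaming_engine = true   # execution: Streaming Engine

  parameters = {
    inputSubscription = "projects/example-project/subscriptions/export-logs-to-datadog-sub"
    url               = "https://http-intake.logs.datadoghq.com"   # Datadog intake URL; site-dependent
    apiKeySource      = "SECRET_MANAGER"
    apiKeySecretId    = "projects/example-project/secrets/datadog-api-key/versions/latest"
    parallelism       = "8"        # performance: maximum parallel requests
    batchCount        = "100"      # performance: batch size
  }
}
```

If the Terraform flow is built on `google_dataflow_job` (an assumption), that also lines up with the worker-count comment further down in this thread: the resource exposes `max_workers` but has no argument for an initial number of workers.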

- Select scaling settings (number of workers and maximum workers)
- Select performance settings (maximum number of parallel requests and batch size)
- Select execution options (Streaming Engine is enabled by default; read more about its [benefits][103])
**Note**: If you select to enable [Dataflow Prime][104], you cannot configure worker machine type in the **Advanced Configuration** section.

Let's remove this part; it's already evident in the UI that you can't pick a machine type when Dataflow Prime is enabled, since a message box appears and states that.

1. In the **Dataflow Job Configuration** section, specify configuration options for the Dataflow job:
- Select deployment settings (Google Cloud region and project to host the created resources---Pub/Sub topics and subscriptions, a log routing sink, a Secret Manager entry, a service account, a Cloud Storage bucket, and a Dataflow job)
**Note**: You cannot name the created resources---the script uses predefined names, so it can skip creation if it finds preexisting resources with the same name.
- Select scaling settings (number of workers and maximum workers)

Let's remove "number of workers" here, since that's not supported in the Terraform flow.

